Search for: All records

Creators/Authors contains: "Liu, Rongjie"

Note: When you click on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Mediation analysis is widely utilized in neuroscience to investigate the role of brain image phenotypes in the neurological pathways from genetic exposures to clinical outcomes. However, it is still difficult to conduct mediation analyses with genome-wide exposures and brain subcortical shape mediators due to several challenges, including (i) the scale of the genetic exposures, that is, millions of single-nucleotide polymorphisms (SNPs); (ii) the nonlinear nature of the space of shape mediators; and (iii) statistical inference on the direct and indirect effects. To tackle these challenges, this paper proposes a genome-wide mediation analysis framework with brain subcortical shape mediators. First, to address the high dimensionality of the genetic exposures, a fast genome-wide association analysis is conducted to discover potential genetic variants with significant effects on the clinical outcome. Second, square-root velocity function representations are extracted from the brain subcortical shapes; these representations fall in an unconstrained linear Hilbert subspace. Third, to identify the underlying causal pathways from the detected SNPs to the clinical outcome through the shape mediators, we utilize a shape mediation analysis framework consisting of a shape-on-scalar model and a scalar-on-shape model. Furthermore, a bootstrap resampling approach is adopted to assess the significance of both global and spatial mediation effects. Finally, our framework is applied to the corpus callosum shape data from the Alzheimer's Disease Neuroimaging Initiative.
    Free, publicly-accessible full text available August 1, 2026
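     One generic way to write the shape-on-scalar / scalar-on-shape pair described above, shown with a scalar-valued mediator for simplicity, is the functional-mediation form below; the symbols (X_i for a detected SNP, M_i(s) for the SRVF-represented shape mediator, Z_i for covariates, Y_i for the clinical outcome) are illustrative and not necessarily the paper's exact parameterization.

         M_i(s) = X_i \beta(s) + Z_i^{\top} \alpha(s) + \eta_i(s), \quad s \in [0, 1]  \qquad \text{(shape-on-scalar)}
         Y_i = X_i \gamma + Z_i^{\top} \theta + \int_0^1 M_i(s)\, \delta(s)\, ds + \varepsilon_i  \qquad \text{(scalar-on-shape)}

     Under this parameterization the direct effect is \gamma, the spatial (pointwise) mediation effect at arc-length position s is \beta(s)\delta(s), and the global indirect effect is \int_0^1 \beta(s)\delta(s)\, ds; bootstrap resampling, as in the abstract, then provides confidence intervals for these quantities.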
  2. Large-scale imaging studies often face challenges stemming from heterogeneity arising from differences in geographic location, instrument setup, image acquisition protocol, study design, and latent variables that remain undisclosed. While numerous regression models have been developed to elucidate the interplay between imaging responses and relevant covariates, limited attention has been devoted to cases where the imaging responses pertain to the domain of shape. This adds complexity to the problem of imaging heterogeneity, primarily due to the unique properties inherent to shape representations, including nonlinearity, high dimensionality, and the intricacies of quotient-space geometry. To tackle this issue, we propose a novel approach: a shape-on-scalar regression model that incorporates confounder adjustment. In particular, we leverage the square-root velocity function to extract elastic shape representations, which are embedded within the linear Hilbert space of square-integrable functions. Subsequently, we introduce a shape regression model aimed at characterizing the relationship between elastic shapes and covariates of interest, while effectively managing the challenges posed by imaging heterogeneity. We develop comprehensive procedures for estimating and making inferences about the unknown model parameters. Through real-data analysis, our method demonstrates its superiority in estimation accuracy compared with existing approaches.
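     To make the square-root velocity step above concrete, here is a minimal numerical sketch of the SRVF transform q(t) = beta'(t) / sqrt(||beta'(t)||) for a discretized planar contour; the function name, the finite-difference discretization, and the zero-velocity guard are our own choices, not the paper's implementation.

        import numpy as np

        def srvf(beta, t):
            """Square-root velocity representation of a sampled curve.

            beta : (n, d) array of curve points; t : (n,) parameter values.
            Returns q with q(t) = beta'(t) / sqrt(||beta'(t)||), under which the
            elastic shape metric reduces to the ordinary L2 metric.
            """
            vel = np.gradient(beta, t, axis=0)        # finite-difference velocity
            speed = np.linalg.norm(vel, axis=1)       # ||beta'(t)|| at each sample
            speed = np.maximum(speed, 1e-12)          # guard against zero velocity
            return vel / np.sqrt(speed)[:, None]

        # Example: SRVF of a unit circle sampled at 200 points (a stand-in for a
        # corpus callosum contour, with rows as sampled boundary points).
        t = np.linspace(0.0, 1.0, 200)
        circle = np.column_stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
        q = srvf(circle, t)                           # array of shape (200, 2)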
  3. The problem of using covariates to predict the shapes of objects in a regression setting is important in many fields. A formal statistical approach, termed the geodesic regression model, is commonly used for modeling and analyzing relationships between Euclidean predictors and shape responses. Despite its popularity, this model faces several key challenges, including (i) misalignment of shapes due to pre-processing steps, (ii) difficulties in shape alignment due to imaging heterogeneity, and (iii) the lack of spatial correlation modeling for shape structures. This paper proposes a comprehensive geodesic factor regression model that addresses all of these challenges. Instead of using shapes as extracted from pre-registered data, it takes a more fundamental approach, incorporating the alignment step within the proposed regression model and learning the alignments from both pre-shape and covariate data. Additionally, it specifies spatial correlation structures using low-dimensional representations, namely latent factors on the tangent space together with isotropic error terms. The proposed framework yields substantial improvements in regression performance, as demonstrated through simulation studies and a real-data analysis of corpus callosum contour data obtained from the ADNI study.
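     For reference, the standard geodesic regression model that the abstract above builds on fits a base point and a tangent direction by least squares along geodesics on the shape manifold; the formulation below is the usual textbook form with a scalar covariate, not the paper's factor model.

         (\hat{p}, \hat{v}) = \arg\min_{p \in \mathcal{M},\; v \in T_p\mathcal{M}} \; \sum_{i=1}^{n} d_{\mathcal{M}}^{2}\big(\mathrm{Exp}_p(x_i v),\, y_i\big)

     Here y_i is the i-th shape response, x_i the covariate, \mathrm{Exp}_p the Riemannian exponential map at p, and d_{\mathcal{M}} the geodesic distance on the shape manifold \mathcal{M}. As described in the abstract, the proposed factor version additionally estimates the alignment (rotation and re-parameterization) within the model and replaces the unstructured residual with low-dimensional latent factors on the tangent space plus an isotropic error term.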
  4. Fast and effective image compression for multi-dimensional images has become increasingly important for efficient storage and transfer of massive amounts of high-resolution images and videos. Desirable properties in compression methods include (1) high reconstruction quality at a wide range of compression rates while preserving key local details, (2) computational scalability, (3) applicability to a variety of image/video types and dimensions, and (4) ease of tuning. We present such a method for multi-dimensional image compression called Compression via Adaptive Recursive Partitioning (CARP). CARP uses an optimal permutation of the image pixels, inferred from a Bayesian probabilistic model on recursive partitions of the image, to reduce its effective dimensionality, achieving a parsimonious representation that preserves information. CARP uses a multi-layer Bayesian hierarchical model to achieve self-tuning and regularization and avoid overfitting, leaving a single parameter to be specified by the user to achieve the desired compression rate. Extensive numerical experiments using a variety of datasets, including 2D ImageNet images, 3D medical images, and real-life YouTube and surveillance videos, show that CARP dominates state-of-the-art compression approaches, including JPEG, JPEG2000, MPEG4, and a neural network-based method, for all of these image types and often on nearly all of the individual images.
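     To illustrate how a recursive partition of the pixel grid induces a permutation of the pixels, here is a toy sketch that splits the longer side of each block in half and concatenates the children's orderings; CARP instead infers the partition (and hence the permutation) from its Bayesian hierarchical model, so the function name and the fixed split rule below are our own simplifications.

        import numpy as np

        def partition_order(h, w, top=0, left=0):
            """Pixel ordering induced by recursively partitioning an h-by-w block.

            Each block is split in half along its longer side until single pixels
            remain; the leaf visit order defines a permutation of the pixel grid.
            """
            if h == 1 and w == 1:
                return [(top, left)]
            if h >= w:                                 # split into top/bottom halves
                m = h // 2
                return (partition_order(m, w, top, left)
                        + partition_order(h - m, w, top + m, left))
            m = w // 2                                 # otherwise split left/right
            return (partition_order(h, m, top, left)
                    + partition_order(h, w - m, top, left + m))

        # Flatten a toy image in the induced order.
        img = np.arange(16).reshape(4, 4)
        order = partition_order(*img.shape)
        flat = np.array([img[r, c] for r, c in order])   # permuted pixel sequence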